In this paper, we present a novel two-pass approach to unify streaming and non-streaming end-to-end (E2E) speech recognition in a single model. Our model adopts the hybrid CTC/attention architecture, in which the conformer layers in the encoder are modified. We propose a dynamic chunk-based attention strategy to allow arbitrary right context length. At inference time, the CTC decoder generates n-best hypotheses in a streaming way. The inference latency can be easily controlled by only changing the chunk size. The CTC hypotheses are then rescored by the attention decoder to obtain the final result. This efficient rescoring process introduces very little sentence-level latency. Our experiments on the open 170-hour AISHELL-1 dataset show that the proposed method can simply and efficiently unify streaming and non-streaming models. On the AISHELL-1 test set, our unified model achieves a 5.60% relative character error rate (CER) reduction in non-streaming ASR compared to a standard non-streaming transformer. The same model achieves 5.42% CER with 640 ms latency in a streaming ASR system.
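To make the chunk-based attention concrete, here is a minimal sketch (not the authors' code) of a mask in which every frame attends to all previous chunks plus its own chunk, so the right context is bounded by the chunk size; setting the chunk size to the full sequence length recovers non-streaming attention. The 40 ms encoder frame rate used to relate chunk size to latency is an assumption for illustration only.

```python
# A minimal sketch of a dynamic chunk-based attention mask: each frame may
# attend to every frame up to the end of its own chunk, so the right context
# is bounded by the chunk size, while the left context is unrestricted.
import torch

def chunk_attention_mask(seq_len: int, chunk_size: int) -> torch.Tensor:
    """Return a (seq_len, seq_len) boolean mask; True = attention allowed."""
    pos = torch.arange(seq_len)
    # index of the last frame of the chunk that each position belongs to
    chunk_end = (pos // chunk_size + 1) * chunk_size - 1
    # position i may attend to position j iff j <= end of chunk(i)
    return pos.unsqueeze(0) <= chunk_end.unsqueeze(1).clamp(max=seq_len - 1)

# Example: 640 ms latency would roughly correspond to a 16-frame chunk at a
# 40 ms encoder frame rate (an illustrative assumption, not a paper detail).
mask = chunk_attention_mask(seq_len=8, chunk_size=4)
print(mask.int())
```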
Packaged fresh-cut lettuce is widely consumed as a major component of vegetable salads owing to its high nutrition, freshness, and convenience. However, enzymatic browning discoloration on the cut edges of lettuce significantly reduces product quality and shelf life. Although many research and breeding efforts are underway to minimize browning, progress is hindered by the lack of a rapid and reliable methodology to evaluate browning. Current methods to identify and quantify browning are either too subjective, labor intensive, or inaccurate. In this paper, we report a deep learning model for lettuce browning prediction. To the best of our knowledge, it is the first work to apply deep learning to lettuce browning prediction, using a pretrained Siamese Quadratic Swin (SQ-Swin) transformer with several highlights. First, our model includes quadratic features in the transformer, which are more powerful for incorporating real-world representations than linear features. Second, a multi-scale training strategy is proposed to augment the data and exploit the inherent self-similarity of the lettuce images. Third, the proposed model uses a Siamese architecture, which learns the inter-relations among the limited training samples. Fourth, the model is pretrained on ImageNet and then trained with the Reptile meta-learning algorithm to learn higher-order gradients rather than regular ones. Experimental results on the fresh-cut lettuce dataset show that the proposed SQ-Swin outperforms traditional methods and other deep-learning-based backbones.
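As a side note on the meta-learning component mentioned above, the Reptile update itself is simple. The sketch below shows the generic algorithm (inner SGD adaptation followed by a meta step toward the adapted weights); it is not tied to the SQ-Swin code, and the toy linear model and MSE loss are assumptions for illustration.

```python
# A minimal sketch of a Reptile meta-learning step: after a few inner SGD
# steps on one task, move the meta-parameters a small step toward the
# task-adapted parameters: theta += meta_lr * (theta_task - theta).
import copy
import torch
import torch.nn as nn

def reptile_step(model, task_batches, inner_lr=1e-2, meta_lr=0.1, inner_steps=3):
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    loss_fn = nn.MSELoss()
    for _ in range(inner_steps):
        for x, y in task_batches:              # inner-loop adaptation on one task
            opt.zero_grad()
            loss_fn(adapted(x), y).backward()
            opt.step()
    with torch.no_grad():                       # meta update toward adapted weights
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (p_adapted - p))

model = nn.Linear(8, 1)                         # toy model for illustration
task = [(torch.randn(4, 8), torch.randn(4, 1))]
reptile_step(model, task)
```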
This paper introduces TALCS, a new corpus of Mandarin-English code-switching speech, suitable for training and evaluating code-switching speech recognition systems. The TALCS corpus is derived from real online one-on-one English teaching scenes of the TAL Education Group and contains roughly 587 hours of speech sampled at 16 kHz. To the best of our knowledge, the TALCS corpus is the largest well-labeled Mandarin-English code-switching open-source automatic speech recognition (ASR) dataset in the world. In this paper, we describe the recording procedure in detail, including the audio capturing devices and corpus environments. The TALCS corpus is freely available for download under a permissive license. Using the TALCS corpus, we conduct ASR experiments with two popular speech recognition toolkits, ESPnet and WeNet, to build baseline systems, and we compare the mixture error rate (MER) performance of the two toolkits on TALCS. The experimental results indicate that the quality of the audio recordings and transcriptions is promising and that the baseline systems are workable.
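For readers unfamiliar with the metric, a mixture error rate for code-switching text is typically computed by treating each Mandarin character and each English word as one token before taking the edit distance. The sketch below illustrates this idea only; it is not the corpus' official scoring script, and the tokenization regex is an assumption.

```python
# A hedged sketch of a mixture error rate (MER) for code-switching text:
# Mandarin is tokenized per character, English per word, then ordinary
# edit distance is applied to the mixed token sequence.
import re

def mixed_tokens(text):
    # CJK characters become single tokens; Latin words stay whole words.
    return re.findall(r"[\u4e00-\u9fff]|[A-Za-z']+", text)

def edit_distance(ref, hyp):
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))
    return d[-1][-1]

def mer(ref_text, hyp_text):
    ref, hyp = mixed_tokens(ref_text), mixed_tokens(hyp_text)
    return edit_distance(ref, hyp) / max(len(ref), 1)

print(mer("今天我们学习 lesson two", "今天我学习 lesson too"))  # 0.25
```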
Prompt tuning is a new paradigm for fine-tuning pre-trained language models in a parameter-efficient way. Here, we explore the use of HyperNetworks to generate hyper-prompts: we propose HyperPrompt, a novel architecture for prompt-based task-conditioning of self-attention in Transformers. The hyper-prompts are end-to-end learnable via generation by a HyperNetwork. HyperPrompt allows the network to learn task-specific feature maps, where the hyper-prompts serve as task-global memories for the queries to attend to, while enabling flexible information sharing among tasks. We show that HyperPrompt is competitive against strong multi-task learning baselines with as few as 0.14% additional task-conditioning parameters, achieving great parameter and computational efficiency. Through extensive empirical experiments, we demonstrate that HyperPrompt can achieve superior performance over strong T5 multi-task learning baselines and parameter-efficient adapter variants, including Prompt-Tuning and HyperFormer++, on the natural language understanding benchmarks GLUE and SuperGLUE across many model sizes.
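A rough sketch of the idea, assuming a small MLP hypernetwork and illustrative dimensions (this is not the paper's exact implementation): a task embedding is mapped to hyper-prompts that are prepended to the keys and values of self-attention, so they act as task-global memory for the queries to attend to.

```python
# A hedged sketch of hypernetwork-generated prompts injected into attention.
import torch
import torch.nn as nn

class HyperPromptAttention(nn.Module):
    def __init__(self, d_model=64, n_tasks=4, n_prompt=8, d_task=16):
        super().__init__()
        self.task_emb = nn.Embedding(n_tasks, d_task)
        # hypernetwork: task embedding -> prompt tokens for keys and values
        self.hyper = nn.Sequential(
            nn.Linear(d_task, 128), nn.ReLU(),
            nn.Linear(128, 2 * n_prompt * d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.n_prompt, self.d_model = n_prompt, d_model

    def forward(self, x, task_id):
        b = x.size(0)
        p = self.hyper(self.task_emb(task_id))               # (b, 2*n_prompt*d_model)
        pk, pv = p.view(b, 2, self.n_prompt, self.d_model).unbind(dim=1)
        k = torch.cat([pk, x], dim=1)                         # prepend prompt keys
        v = torch.cat([pv, x], dim=1)                         # prepend prompt values
        out, _ = self.attn(x, k, v)
        return out

x = torch.randn(2, 10, 64)
task = torch.tensor([0, 1])
print(HyperPromptAttention()(x, task).shape)                  # torch.Size([2, 10, 64])
```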
Spatiotemporal forecasting has various applications in the neuroscience, climate, and transportation domains. Traffic forecasting is one canonical example of such a learning task. The task is challenging due to (1) complex spatial dependency on road networks, (2) non-linear temporal dynamics with changing road conditions and (3) inherent difficulty of long-term forecasting. To address these challenges, we propose to model the traffic flow as a diffusion process on a directed graph and introduce Diffusion Convolutional Recurrent Neural Network (DCRNN), a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow. Specifically, DCRNN captures the spatial dependency using bidirectional random walks on the graph, and the temporal dependency using the encoder-decoder architecture with scheduled sampling. We evaluate the framework on two real-world large-scale road network traffic datasets and observe consistent improvements of 12%-15% over state-of-the-art baselines.
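The diffusion convolution at the core of DCRNN can be sketched as a truncated sum over forward and backward random-walk steps. The simplified version below uses scalar filter coefficients per hop and a toy graph purely for illustration; the full model learns richer per-feature filters.

```python
# A minimal numpy sketch of diffusion convolution on a directed graph using
# bidirectional random walks: out = sum_k (theta_fwd[k] * P_f^k + theta_bwd[k] * P_b^k) X.
import numpy as np

def diffusion_conv(W, X, theta_fwd, theta_bwd):
    """W: (N, N) weighted adjacency, X: (N, F) node features,
    theta_*: K scalar filter coefficients per walk direction."""
    K = len(theta_fwd)
    P_f = W / W.sum(axis=1, keepdims=True)        # forward transition  D_O^-1 W
    P_b = W.T / W.T.sum(axis=1, keepdims=True)    # backward transition D_I^-1 W^T
    out = np.zeros_like(X)
    Xf, Xb = X.copy(), X.copy()
    for k in range(K):
        out += theta_fwd[k] * Xf + theta_bwd[k] * Xb
        Xf, Xb = P_f @ Xf, P_b @ Xb               # take one more diffusion step
    return out

W = np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]]) + 0.1  # toy directed graph
X = np.random.randn(3, 2)
print(diffusion_conv(W, X, theta_fwd=[0.5, 0.3], theta_bwd=[0.5, 0.3]).shape)
```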
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distilling targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS token- and feature-based distillation; 2) an intermediate layer of the teacher network performs better as the target than the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, namely by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
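A hedged sketch of what token-relation distillation can look like (the exact loss and the features used, e.g., query/key projections versus raw tokens, may differ from the paper's recipe): match softmax-normalized token-to-token similarity maps of the student and the teacher with a KL term, which works even when their embedding dimensions differ.

```python
# A hedged sketch of relation-based distillation: align the (N x N) token
# relation maps of student and teacher rather than the features themselves.
import torch
import torch.nn.functional as F

def token_relation(feat, temperature=1.0):
    """feat: (B, N, D) token features -> (B, N, N) log-normalized relation map."""
    sim = feat @ feat.transpose(1, 2) / (feat.size(-1) ** 0.5)
    return F.log_softmax(sim / temperature, dim=-1)

def relation_distill_loss(student_feat, teacher_feat):
    s = token_relation(student_feat)               # log-probabilities
    t = token_relation(teacher_feat).exp()         # probabilities
    return F.kl_div(s, t, reduction="batchmean")

student = torch.randn(2, 196, 192)   # e.g. ViT-Tiny tokens (illustrative sizes)
teacher = torch.randn(2, 196, 768)   # e.g. tokens from an intermediate ViT-Base layer
print(relation_distill_loss(student, teacher).item())
```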
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
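One possible way to let a style code adjust feed-forward weights, shown purely as an illustration and not as the paper's exact design, is to have the style code softly combine several candidate weight matrices, so each reference style yields different effective feed-forward weights; all sizes and the number of candidates below are assumptions.

```python
# A hedged sketch of a style-adaptive feed-forward layer: the style code
# produces mixing coefficients over K candidate weight sets.
import torch
import torch.nn as nn

class StyleAdaptiveFFN(nn.Module):
    def __init__(self, d_model=256, d_hidden=1024, d_style=128, n_candidates=4):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(n_candidates, d_model, d_hidden) * 0.02)
        self.w2 = nn.Parameter(torch.randn(n_candidates, d_hidden, d_model) * 0.02)
        self.gate = nn.Linear(d_style, n_candidates)   # style -> mixing weights
        self.act = nn.GELU()

    def forward(self, x, style_code):
        # x: (B, T, d_model), style_code: (B, d_style)
        alpha = torch.softmax(self.gate(style_code), dim=-1)     # (B, K)
        W1 = torch.einsum("bk,kdh->bdh", alpha, self.w1)         # per-sample weights
        W2 = torch.einsum("bk,khd->bhd", alpha, self.w2)
        h = self.act(torch.einsum("btd,bdh->bth", x, W1))
        return torch.einsum("bth,bhd->btd", h, W2)

ffn = StyleAdaptiveFFN()
out = ffn(torch.randn(2, 50, 256), torch.randn(2, 128))
print(out.shape)   # torch.Size([2, 50, 256])
```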
Decompilation aims to transform a low-level programming language (LPL) (e.g., a binary file) into its functionally equivalent high-level programming language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
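Based on the description above, a Global Response Normalization layer can be sketched as follows (channels-last layout and sizes are illustrative): aggregate a global L2 response per channel, divisively normalize it across channels, and use it to recalibrate the features with a residual connection, which encourages competition among channels.

```python
# A minimal sketch of a Global Response Normalization (GRN) layer in
# channels-last layout: global aggregation, divisive normalization, and
# feature recalibration with learnable affine parameters plus a residual.
import torch
import torch.nn as nn

class GRN(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))

    def forward(self, x):                                        # x: (N, H, W, C)
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)        # global response per channel
        nx = gx / (gx.mean(dim=-1, keepdim=True) + 1e-6)         # divisive normalization
        return self.gamma * (x * nx) + self.beta + x             # recalibrate + residual

x = torch.randn(2, 14, 14, 96)
print(GRN(96)(x).shape)   # torch.Size([2, 14, 14, 96])
```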